
    An App Performance Optimization Advisor for Mobile Device App Marketplaces

    On mobile phones, users and developers rely on official marketplaces, which serve as repositories of apps. The Google Play Store and the Apple App Store are the official marketplaces for Android and Apple products, each offering more than a million apps. Although both repositories provide descriptions of apps, information about performance is not available. Because of the constrained hardware of mobile devices, users and developers must manage the available resources carefully, and they should have access to performance information about apps. Even if this information were available, the selection of apps would still depend on user preferences, and making optimal decisions would require considerable cognitive effort. Considering this, we propose APOA, a recommendation system that can be implemented in any marketplace to help users and developers compare apps in terms of performance. APOA takes as input metric values of apps and a set of metrics to optimize; it solves an optimization problem and generates optimal sets of apps for different user contexts. We show how APOA works in an Android case study: out of 140 apps, we define typical usage scenarios and collect measurements of power, CPU, memory, and network usage to demonstrate the benefit of using APOA.
    Comment: 18 pages, 8 figures
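    To make the idea concrete, here is a minimal sketch of the kind of context-weighted selection such a system performs. The app names, metric values, weights, and the greedy per-category rule are illustrative assumptions, not the paper's actual formulation or data.

```python
# Hypothetical app catalogue: (name, category, power_mW, cpu_pct, memory_MB, network_KBps)
APPS = [
    ("BrowserA", "browser", 320, 12, 180, 40),
    ("BrowserB", "browser", 210, 18, 240, 35),
    ("PlayerA",  "player",  150,  8, 120, 10),
    ("PlayerB",  "player",  190,  6,  90, 12),
]

def best_apps(apps, weights):
    """Pick, per category, the app minimizing the weighted metric cost
    for a given user context (e.g. low battery -> high power weight)."""
    best = {}
    for name, category, *metrics in apps:
        cost = sum(w * m for w, m in zip(weights, metrics))
        if category not in best or cost < best[category][1]:
            best[category] = (name, cost)
    return {c: n for c, (n, _) in best.items()}

# Context: a battery-constrained user weighs power consumption heavily.
print(best_apps(APPS, weights=(1.0, 0.1, 0.01, 0.05)))
# → {'browser': 'BrowserB', 'player': 'PlayerA'}
```

    Changing the weights models a different user context, which changes the recommended set.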

    Evidence-based Software Process Recovery

    Developing a large software system involves many complicated, varied, and inter-dependent tasks, and these tasks are typically implemented using a combination of defined processes, semi-automated tools, and ad hoc practices. Stakeholders in the development process --- including software developers, managers, and customers --- often want to be able to track the actual practices being employed within a project. For example, a customer may wish to be sure that the process is ISO 9000 compliant, a manager may wish to track the amount of testing that has been done in the current iteration, and a developer may wish to determine who has recently been working on a subsystem that has had several major bugs appear in it. However, extracting the software development processes from an existing project is expensive if one must rely upon manual inspection of artifacts and interviews of developers and their managers. Previously, researchers have suggested the live observation and instrumentation of a project to allow for more measurement, but this is costly, invasive, and also requires a live running project. In this work, we propose an approach that we call software process recovery that is based on after-the-fact analysis of various kinds of software development artifacts. We use a variety of supervised and unsupervised techniques from machine learning, topic analysis, natural language processing, and statistics on software repositories such as version control systems, bug trackers, and mailing list archives. We show how we can combine all of these methods to recover process signals that we map back to software development processes such as the Unified Process. The Unified Process has been visualized using a time-line view that shows effort per parallel discipline occurring across time. This visualization is called the Unified Process diagram. 
We use this diagram as inspiration to produce Recovered Unified Process Views (RUPVs), a concrete version of this theoretical Unified Process diagram. We then validate these methods with case studies of multiple open-source software systems.
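    As a toy illustration of recovering a process signal from repository artifacts, the sketch below classifies commit messages into disciplines and aggregates counts per time period, the raw data behind a Unified-Process-style timeline view. The keyword heuristic and discipline names are assumptions for illustration; the work itself uses machine learning, topic analysis, and NLP rather than keyword matching.

```python
from collections import Counter

# Hypothetical keyword heuristic (stand-in for the actual classifiers).
DISCIPLINE_KEYWORDS = {
    "implementation": ["add", "implement", "feature", "refactor"],
    "testing":        ["test", "junit", "coverage", "assert"],
    "deployment":     ["release", "build", "package", "deploy"],
}

def classify(message):
    """Assign a commit message to the discipline whose keywords it best matches."""
    words = set(message.lower().split())
    scores = {d: len(words & set(kws)) for d, kws in DISCIPLINE_KEYWORDS.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "other"

def effort_per_period(commits):
    """Count commits per (period, discipline): the recovered process signal."""
    signal = Counter()
    for period, message in commits:
        signal[(period, classify(message))] += 1
    return signal

commits = [
    ("2010-Q1", "implement parser feature"),
    ("2010-Q1", "unit test coverage for parser"),
    ("2010-Q2", "release build 1.0"),
]
print(effort_per_period(commits))
```

    Plotting these counts per discipline across time would yield a rough Recovered Unified Process View.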

    Amazing Grace: How Sweet the Sound of Synthesised Bagpipes

    A bagpipe is a type of wind instrument consisting of a melody pipe called the chanter, which has an enclosed reed, and several drone pipes. The chanter supplies the melody note, and the air that feeds the pipes is provided by the bag, which is inflated through a blowpipe and driven by the player's arm. The goal of this project was to create a bagpipe using SuperCollider, a program for audio synthesis. The artificial bagpipe (hereafter referred to as a 'synth') was broken down into four components: the chanter, the bass drone, the first tenor drone, and the second tenor drone. The chanter carries the frequency of the note, the bass drone's frequency is half that of the chanter, and the frequency of the tenor drones is half that of the bass drone; these ratios follow from the lengths of the pipes relative to each other. To create the synth, a sine oscillator was passed through a resonance filter and then a reverb filter, mimicking the echo that sound acquires when forced through a tube or enclosed space. All four pipes were added together to form the synth. To play a song, the synth was placed in a pattern so SuperCollider could receive an array of notes, which serve as the frequencies of the chanter, and play the song automatically. The notes for Amazing Grace were transcribed into MIDI notes and beat durations, and these arrays were fed into the pattern to produce the song. The synthetic version of Amazing Grace was then graphed, in terms of frequency and loudness, and compared to the graph of a recording of Amazing Grace played on a real bagpipe. There are differences between the two sound files, the most significant being that the real bagpipe has much more variation in loudness: the synthesized bagpipe had a more gradual and subdued level, whereas the natural bagpipe was much more randomized.
    Taking these comparisons into consideration, SuperCollider can be used to create an approximation of a bagpipe, but under scrutiny the artificial version currently falls short.
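    The additive core of the design (four sine pipes at the stated frequency ratios) can be sketched outside SuperCollider as well. This is an illustrative Python approximation only: it omits the resonance and reverb stages the project applied, and the 440 Hz chanter note and sample rate are assumed values.

```python
import math

SAMPLE_RATE = 44100  # samples per second (assumed)

def bagpipe_sample(t, chanter_hz):
    """Sum of four sine 'pipes' at the ratios described above:
    bass drone at half the chanter frequency, two tenor drones
    at half the bass frequency. Averaged to stay within [-1, 1]."""
    bass_hz = chanter_hz / 2.0
    tenor_hz = bass_hz / 2.0
    pipes = [chanter_hz, bass_hz, tenor_hz, tenor_hz]
    return sum(math.sin(2 * math.pi * f * t) for f in pipes) / len(pipes)

# One second of a 440 Hz chanter note, as raw samples.
signal = [bagpipe_sample(n / SAMPLE_RATE, 440.0) for n in range(SAMPLE_RATE)]
```

    In the actual project, this summed signal would then pass through the resonance and reverb filters to imitate the tube-like echo of real pipes.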

    ISOLATED INSTRUMENT TRANSCRIPTION USING A DEEP BELIEF NETWORK

    Automatic music transcription is a difficult task that has provoked extensive research on transcription systems that are predominantly general purpose, processing any number or type of instruments sounding simultaneously. This paper presents a polyphonic transcription system that is constrained to processing the output of a single instrument with an upper bound on polyphony. For example, a guitar has six strings and is limited to producing six notes simultaneously. The transcription system consists of a novel pitch estimation algorithm that uses a deep belief network and multi-label learning techniques to generate multiple pitch estimates for each audio analysis frame, such that the polyphony does not exceed that of the instrument. The implemented transcription system is evaluated on a compiled dataset of synthesized guitar recordings. Comparing these results to those of a prior single-instrument polyphonic transcription system that achieved strong results, this paper demonstrates the effectiveness of deep, multi-label learning for the task of polyphonic transcription.
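    The polyphony-bounded multi-label selection step can be sketched as follows. The threshold value, the example activations, and the thresholding-plus-top-k rule are assumptions for illustration, not the paper's exact decision procedure; only the six-note guitar bound comes from the abstract.

```python
def estimate_pitches(activations, polyphony=6, threshold=0.5):
    """Multi-label pitch selection with an instrument polyphony bound:
    keep pitches whose activation exceeds the threshold, capped at the
    instrument's maximum simultaneous notes (6 for a guitar).

    `activations` maps MIDI pitch number -> activation in [0, 1]
    (e.g. the output layer of a deep belief network for one frame).
    """
    candidates = [(p, a) for p, a in activations.items() if a >= threshold]
    candidates.sort(key=lambda pa: pa[1], reverse=True)  # most confident first
    return sorted(p for p, _ in candidates[:polyphony])

# One analysis frame with seven above-threshold pitches: the weakest is dropped.
frame = {40: 0.91, 45: 0.80, 50: 0.10, 52: 0.64, 55: 0.55,
         57: 0.71, 59: 0.58, 60: 0.52}
print(estimate_pitches(frame))
# → [40, 45, 52, 55, 57, 59]
```

    Applying such a cap per frame guarantees the transcription never claims more simultaneous notes than the instrument can physically produce.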